    LHCb calorimeters: Technical Design Report

    LHCb magnet: Technical Design Report

    LHCb RICH: Technical Design Report

    LHCb inner tracker: Technical Design Report

    LHCb muon system: Technical Design Report

    Trigger and data acquisition

    The first part of the talk will give an overview of trigger and data acquisition for collider experiments from a historical perspective, including trigger rates, data volumes and standard solutions. Front-end essentials such as digitizers, signal processing, readout structures, and timing and control will then be developed in more detail. The talk will continue with trigger algorithms and some implementation examples from current experiments. After that, the components of a generic data acquisition system will be reviewed: readout networks (buses, switches, ...), event builders, event filters, data storage, and so on. Finally, software-related aspects such as system configuration and operating modes, run control, data monitoring and quality control will be covered.
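    As a purely illustrative aid, not part of the talk material, the following minimal C++ sketch shows the idea behind one of the DAQ components mentioned above, the event builder: fragments from several front-end sources are buffered per event number, and an event is declared complete once every source has contributed. The Fragment and EventBuilder names and the fixed number of sources are assumptions made for the example.

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <vector>

    // One data fragment produced by a single front-end source for a given event.
    struct Fragment {
        uint64_t eventId;
        int sourceId;
        std::vector<uint8_t> payload;
    };

    // Toy event builder: buffers fragments per event ID and reports a complete
    // event once every source has contributed its fragment.
    class EventBuilder {
    public:
        explicit EventBuilder(int nSources) : nSources_(nSources) {}

        // Returns true when the event identified by frag.eventId is complete.
        bool add(const Fragment& frag) {
            auto& frags = pending_[frag.eventId];
            frags.push_back(frag);
            if (static_cast<int>(frags.size()) == nSources_) {
                // In a real system the assembled event would be handed to an
                // event-filter farm; here we only report completion.
                std::cout << "event " << frag.eventId << " assembled from "
                          << nSources_ << " fragments\n";
                pending_.erase(frag.eventId);
                return true;
            }
            return false;
        }

    private:
        int nSources_;
        std::map<uint64_t, std::vector<Fragment>> pending_;
    };

    int main() {
        EventBuilder builder(3);  // assume three front-end sources
        for (uint64_t ev = 0; ev < 2; ++ev)
            for (int src = 0; src < 3; ++src)
                builder.add({ev, src, {/* raw data */}});
        return 0;
    }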

    Novel functional and distributed approaches to data analysis available in ROOT

    The bright future of particle physics at the Energy and Intensity frontiers poses exciting challenges to the scientific software community. The traditional strategies for processing and analysing data are evolving in order to (i) offer higher-level programming models and (ii) exploit parallelism to cope with the ever-increasing complexity and size of the datasets. This contribution describes how the ROOT framework, a cornerstone of software stacks dedicated to particle physics, is preparing to provide adequate solutions for the analysis of large amounts of scientific data on parallel architectures. The functional approach to parallel data analysis provided with the ROOT TDataFrame interface is then characterised. The design choices behind this new interface are described and compared with other widely adopted tools such as Pandas and Apache Spark. The programming model is illustrated, highlighting the reduction of boilerplate code, the composability of actions and data transformations, and the ability to deal with different data sources such as ROOT, JSON, CSV or databases. Details are given about how the functional approach allows transparent implicit parallelisation of the chain of operations specified by the user. The progress made in the field of distributed analysis is examined; in particular, the power of integrating ROOT with Apache Spark via the PyROOT interface is shown. In addition, the building blocks for the expression of parallelism in ROOT are briefly characterised, together with the structural changes applied to the building and testing infrastructure that were necessary to put them into production.
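    To make the programming model concrete, here is a minimal C++ sketch of a declarative TDataFrame analysis chain. It is not taken from the contribution itself: the function name tdf_example and the generated column are invented for the example, and the exact header and namespace (ROOT::Experimental::TDataFrame in early releases, later renamed ROOT::RDataFrame) depend on the ROOT version.

    // Run from the ROOT prompt, e.g.: root -l tdf_example.C
    #include <ROOT/TDataFrame.hxx>
    #include <TRandom3.h>
    #include <iostream>

    void tdf_example() {
        // A data frame with 1000 entries and no input file: the column "x" is
        // defined on the fly, which keeps the example self-contained.
        ROOT::Experimental::TDataFrame df(1000);

        TRandom3 rng(1);
        auto withX = df.Define("x", [&rng]() { return rng.Gaus(0., 1.); });

        // Declarative chain: a transformation (Filter) followed by actions
        // (Histo1D, Count); the event loop only runs when a result is accessed.
        auto positive = withX.Filter([](double x) { return x > 0.; }, {"x"});
        auto hist     = positive.Histo1D("x");
        auto nPassed  = positive.Count();

        std::cout << *nPassed << " entries passed; histogram mean = "
                  << hist->GetMean() << std::endl;

        // ROOT::EnableImplicitMT() would parallelise the event loop
        // transparently, provided the Define callables are thread-safe (the
        // shared TRandom3 above is not, so it is omitted here).
    }

    The chain illustrates the reduction of boilerplate the abstract refers to: no explicit loop over entries is written, and the same code runs serially or in parallel depending only on whether implicit multi-threading is enabled.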